Results 1 - 20 of 702
1.
bioRxiv ; 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38559145

ABSTRACT

Multi-modal imaging analyses of dosed tissue samples can provide more comprehensive insight into the effects of a therapeutically active compound on a target tissue than single-modal imaging. For example, simultaneous spatial mapping of pharmaceutical compounds and endogenous macromolecule receptors is difficult to achieve in a single imaging experiment. Herein, we present a multi-modal workflow combining imaging mass spectrometry with immunohistochemistry (IHC) fluorescence imaging and brightfield microscopy imaging. Imaging mass spectrometry enables direct mapping of pharmaceutical compounds and metabolites, IHC fluorescence imaging can visualize large proteins, and brightfield microscopy imaging provides tissue morphology information. Single-cell resolution images are generally difficult to acquire with imaging mass spectrometry but are readily acquired with IHC fluorescence and brightfield microscopy imaging. Spatial sharpening of mass spectrometry images would thus allow higher-fidelity co-registration with higher-resolution microscopy images. The spatial resolution of imaging mass spectrometry can be computationally enhanced via an image fusion workflow, which models the relationship between the intensity values in the mass spectrometry image and the features of a high-spatial-resolution microscopy image. As a proof of concept, our multi-modal workflow was applied to brain tissue extracted from a Sprague Dawley rat dosed with a kratom alkaloid, corynantheidine. Four candidate mathematical models were tested: linear regression, partial least squares (PLS) regression, random forest regression, and a two-dimensional convolutional neural network (2-D CNN). The random forest and 2-D CNN models most accurately predicted the intensity values at each pixel as well as the overall patterns of the mass spectrometry images, while also providing the best spatial resolution enhancements. Image fusion enabled prediction of mass spectrometry images of corynantheidine, GABA, and glutamine at approximately 2.5 µm spatial resolution, a significant improvement over the original images acquired at 25 µm. The predicted mass spectrometry images were then co-registered with an H&E image and an IHC fluorescence image of the µ-opioid receptor to assess co-localization of corynantheidine with brain cells. Our study also provides insight into the different evaluation parameters to consider when utilizing image fusion for biological applications.
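A minimal illustrative sketch (not the authors' pipeline) of the regression step behind such image fusion: a random forest is fit between microscopy intensity and mass spectrometry intensity on the coarse grid and then applied at full microscopy resolution. The single intensity feature and the function name are assumptions for illustration.

from sklearn.ensemble import RandomForestRegressor
from skimage.transform import resize

def fuse_random_forest(ms_img, micro_img):
    # Training pairs: the high-resolution microscopy image averaged down to the coarse MS grid
    micro_lr = resize(micro_img.astype(float), ms_img.shape, anti_aliasing=True)
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(micro_lr.reshape(-1, 1), ms_img.reshape(-1))
    # Prediction: apply the learned intensity relationship at native microscopy resolution
    pred = rf.predict(micro_img.astype(float).reshape(-1, 1))
    return pred.reshape(micro_img.shape)

In practice, local texture features or small patches (rather than a single intensity value) would serve as predictors, as the abstract's comparison of PLS, random forest, and 2-D CNN models implies.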

2.
Article in English | MEDLINE | ID: mdl-38644712

ABSTRACT

BACKGROUND: Diseases are medical conditions associated with specific signs and symptoms. A disease may be caused by internal dysfunction or by external factors such as pathogens. Cerebrovascular disease can arise from diverse causes, including thrombosis, atherosclerosis, cerebral venous thrombosis, or an embolic arterial blood clot. OBJECTIVE: In this paper, the authors propose a robust framework for the detection of cerebrovascular disease using two different models, which were additionally validated on external datasets. METHODS: In proposed model 1, the discrete Fourier transform is used to fuse the CT and MR images, and the fused images are classified using machine learning techniques and pre-trained models; proposed model 2 is a cascaded model. Performance was evaluated in terms of accuracy and loss. RESULTS: An accuracy of 92% was obtained using a Support Vector Machine with Gray Level Difference Statistics and shape features, with Principal Component Analysis as the feature selection technique; Inception V3 achieved 95.6% accuracy, and the cascaded model achieved 96.21% accuracy. CONCLUSION: The cascaded model was further validated on other datasets, yielding accuracy improvements of 0.11% and 0.14% on the TCIA and BRaTS datasets, respectively.
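A minimal sketch of Fourier-domain CT/MR fusion in the spirit of proposed model 1 (the per-frequency maximum-magnitude rule and the function name are assumptions; the abstract does not state the exact coefficient-selection rule):

import numpy as np

def dft_fuse(ct, mr):
    # Assumes co-registered images of equal size; for each frequency,
    # keep the Fourier coefficient with the larger magnitude.
    F_ct = np.fft.fft2(ct.astype(float))
    F_mr = np.fft.fft2(mr.astype(float))
    fused = np.where(np.abs(F_ct) >= np.abs(F_mr), F_ct, F_mr)
    return np.real(np.fft.ifft2(fused))

The fused image would then be passed to the texture/shape feature extraction and classification stage described in the abstract.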

3.
Comput Biol Med ; 173: 108381, 2024 May.
Article in English | MEDLINE | ID: mdl-38569237

ABSTRACT

Multimodal medical image fusion (MMIF) technology plays a crucial role in medical diagnosis and treatment by integrating different images to obtain fusion images with comprehensive information. Deep learning-based fusion methods have demonstrated superior performance, but some of them still encounter challenges such as imbalanced retention of color and texture information and low fusion efficiency. To alleviate the above issues, this paper presents a real-time MMIF method, called the lightweight residual fusion network (LRFNet). First, a feature extraction framework with three branches is designed. Two independent branches are used to fully extract brightness and texture information. The fusion branch enables different modal information to be interactively fused at a shallow level, thereby better retaining brightness and texture information. Furthermore, a lightweight residual unit is designed to replace the conventional residual convolution in the model, thereby improving the fusion efficiency and reducing the overall model size approximately five-fold. Finally, considering that the high-frequency image decomposed by the wavelet transform contains abundant edge and texture information, an adaptive strategy is proposed for assigning weights to the loss function based on the information content in the high-frequency image. This strategy effectively guides the model toward preserving intricate details. The experimental results on MRI and functional images demonstrate that the proposed method exhibits superior fusion performance and efficiency compared to alternative approaches. The code of LRFNet is available at https://github.com/HeDan-11/LRFNet.
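A rough sketch of the wavelet-based adaptive loss weighting idea (an illustration under assumptions, not LRFNet's actual loss; a one-level Haar DWT and an energy-ratio weight are assumed):

import numpy as np
import pywt

def highfreq_loss_weight(img):
    # One-level 2-D Haar DWT; the weight grows with the share of energy
    # carried by the detail (high-frequency) sub-bands.
    _, (cH, cV, cD) = pywt.dwt2(img.astype(float), 'haar')
    detail = np.sum(cH**2) + np.sum(cV**2) + np.sum(cD**2)
    total = detail + np.sum(img.astype(float)**2)
    return detail / (total + 1e-12)

Such a weight could scale a detail-preservation term in the training loss so that images rich in edges and texture contribute more strongly.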


Subjects
Image Processing, Computer-Assisted; Wavelet Analysis
4.
Article in English | MEDLINE | ID: mdl-38569917

ABSTRACT

This study aimed to introduce a three-dimensional (3D) image fusion method for preoperative simulation of aneurysm clipping. Consecutive unruptured aneurysm cases treated with surgical clipping from March 2021 to October 2023 were included. In all cases, preoperative images of plain computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) 3D fluid-attenuated inversion recovery, 3D heavily T2-weighted images, and 3D rotational angiography were acquired and imported into commercial software (Ziostation2 Plus, Ziosoft, Inc., Tokyo, Japan). The software provided 3D images of the skull, arteries including aneurysms, veins, and brain tissue that could be freely rotated, magnified, trimmed, and superimposed. Using the 3D image fusion method, two operators predicted the clips to be used in the following surgery. The predicted clips and those actually used were compared to give agreement scores for the following factors: (1) type of clips (simple or fenestrated), (2) shape of clips (straight, curved, angled, or bayonet), and (3) clipping strategy (single or multiple). The agreement score ranged from 0 to 3, as a score of 1 or 0 was given for agreement or disagreement on each factor. Interoperator reproducibility was also evaluated. During the study period, 44 aneurysms from 37 patients were clipped. All procedures were successfully completed, aided by surgical corridors precisely reproduced with the 3D image fusion method. Agreement in clip prediction was good, with a mean agreement score of 2.4. Interoperator reproducibility was also high, with a kappa value of 0.79. The 3D image fusion method was useful for preoperative simulation of aneurysm clipping.

5.
Sensors (Basel) ; 24(7)2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38610482

ABSTRACT

The objective of infrared and visual image fusion is to amalgamate the salient and complementary features of the infrared and visual images into a single informative image. To accomplish this, we introduce a novel local-extrema-driven image filter designed to effectively smooth images by reconstructing pixel intensities based on their local extrema. This filter is iteratively applied to the input infrared and visual images, extracting multiple scales of bright and dark feature maps from the differences between successively filtered images. Subsequently, the bright and dark feature maps of the infrared and visual images at each scale are fused using elementwise-maximum and elementwise-minimum strategies, respectively. The two base images, representing the final-scale smoothed images of the infrared and visual images, are fused using a novel structural similarity- and intensity-based strategy. Finally, the fusion image can be straightforwardly produced by combining the fused bright feature map, dark feature map, and base image. Rigorous experimentation conducted on the widely used TNO dataset underscores the superiority of our method in fusing infrared and visual images. Our approach consistently performs on par with or surpasses eleven state-of-the-art image-fusion methods, showing compelling results in both qualitative and quantitative assessments.
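A simplified sketch of the multi-scale bright/dark feature-map fusion (an approximation under assumptions: a morphological min/max envelope stands in for the paper's local-extrema-driven filter, and a plain average replaces the structural-similarity-based base fusion):

import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation, uniform_filter

def extrema_smooth(img, size=5):
    # Smooth by averaging envelopes built from local minima and maxima.
    lower = uniform_filter(grey_erosion(img, size=size), size=size)
    upper = uniform_filter(grey_dilation(img, size=size), size=size)
    return 0.5 * (lower + upper)

def fuse_ir_vis(ir, vis, scales=3):
    ir, vis = ir.astype(float), vis.astype(float)
    bright, dark = 0.0, 0.0
    for _ in range(scales):
        ir_s, vis_s = extrema_smooth(ir), extrema_smooth(vis)
        # Bright/dark feature maps: positive/negative parts of the residuals,
        # fused with elementwise maximum/minimum respectively.
        bright += np.maximum(np.maximum(ir - ir_s, 0), np.maximum(vis - vis_s, 0))
        dark += np.minimum(np.minimum(ir - ir_s, 0), np.minimum(vis - vis_s, 0))
        ir, vis = ir_s, vis_s
    base = 0.5 * (ir + vis)
    return base + bright + dark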

6.
Sensors (Basel) ; 24(7)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38610501

ABSTRACT

Multimodal sensors capture and integrate diverse characteristics of a scene to maximize information gain. In optics, this may involve capturing intensity in specific spectra or polarization states to determine factors such as material properties or an individual's health condition. Combining multimodal camera data with shape data from 3D sensors is challenging. Multimodal cameras (e.g., hyperspectral cameras) and cameras operating outside the visible spectrum (e.g., thermal cameras) lag considerably behind state-of-the-art photo cameras in resolution and image quality. In this article, a new method is demonstrated to superimpose multimodal image data onto a 3D model created by multi-view photogrammetry. While a high-resolution photo camera captures a set of images from varying view angles to reconstruct a detailed 3D model of the scene, low-resolution multimodal camera(s) simultaneously record the scene. All cameras are pre-calibrated and rigidly mounted on a rig, i.e., their imaging properties and relative positions are known. The method was realized in a laboratory setup consisting of a professional photo camera, a thermal camera, and a 12-channel multispectral camera. In our experiments, an accuracy better than one pixel was achieved for the data fusion using multimodal superimposition. Finally, application examples of multimodal 3D digitization are demonstrated, and further steps toward system realization are discussed.
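A minimal sketch of the superimposition step (not the authors' implementation): with known intrinsics and extrinsics, each vertex of the photogrammetric 3D model is projected into the low-resolution multimodal image and textured with the sampled value. The calibration numbers below are placeholders.

import numpy as np
import cv2

# Placeholder calibration of a thermal camera relative to the 3D model frame.
K = np.array([[400.0, 0.0, 160.0], [0.0, 400.0, 120.0], [0.0, 0.0, 1.0]])  # intrinsics
dist = np.zeros(5)                  # lens distortion coefficients
rvec = np.zeros(3)                  # rotation (Rodrigues vector)
tvec = np.array([0.05, 0.0, 0.0])   # translation in metres

def sample_modal_texture(points_3d, modal_img):
    # Project 3-D vertices into the multimodal image and sample the nearest pixels.
    px, _ = cv2.projectPoints(points_3d.astype(np.float64), rvec, tvec, K, dist)
    px = px.reshape(-1, 2)
    h, w = modal_img.shape[:2]
    u = np.clip(np.round(px[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(px[:, 1]).astype(int), 0, h - 1)
    return modal_img[v, u]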

7.
R Soc Open Sci ; 11(4)2024 Apr.
Article in English | MEDLINE | ID: mdl-38601031

ABSTRACT

With the rapid development of medical imaging methods, multimodal medical image fusion techniques have caught the interest of researchers. The aim is to preserve information from diverse sensors using various models to generate a single informative image. The main challenge is to find a trade-off between the spatial and spectral qualities of the resulting fused image and the computational efficiency. This article proposes a fast and reliable method for medical image fusion based on a multilevel guided edge-preserving filtering (MLGEPF) decomposition rule. First, each multimodal medical image was divided into three sublayer categories using the MLGEPF decomposition scheme: a small-scale component, a large-scale component, and a background component. Second, two fusion strategies, a pulse-coupled neural network based on the structure tensor and a maximum-based rule, are applied to combine the three types of layers according to the layers' properties. Finally, the three types of fused sublayers are combined to create the fused image. A total of 40 pairs of brain images from four separate categories of medical conditions were tested in experiments. The image pairs cover various case studies, including magnetic resonance imaging (MRI), TITc, single-photon emission computed tomography (SPECT), and positron emission tomography (PET). Qualitative analysis demonstrates that the visual contrast between the structure and the surrounding tissue is increased by our proposed method. To further enhance the visual comparison, we asked a group of observers to compare our method's outputs with those of other methods and score them. Overall, our proposed fusion scheme increased the visual contrast and received positive subjective reviews. Moreover, objective assessment indicators for each category of medical conditions are also included. Our method achieves high scores on the feature mutual information (FMI), sum of correlation of differences (SCD), Qabf, and Qy indexes. This implies that our fusion algorithm better preserves information and transfers structural and visual content more efficiently.
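A coarse sketch of a three-layer edge-preserving decomposition in the spirit of MLGEPF (an assumption-laden stand-in: a bilateral filter replaces the guided edge-preserving filter, and the parameter values are arbitrary):

import numpy as np
import cv2

def three_layer_split(img):
    img = img.astype(np.float32)
    smooth1 = cv2.bilateralFilter(img, 9, 25, 5)       # mild edge-preserving smoothing
    smooth2 = cv2.bilateralFilter(smooth1, 9, 50, 15)  # stronger smoothing
    small_scale = img - smooth1       # fine texture
    large_scale = smooth1 - smooth2   # larger structures and edges
    background = smooth2              # base/background component
    return small_scale, large_scale, background

Each layer pair from the two modalities would then be fused with its own rule (e.g., a PCNN-driven rule for some layers and a maximum-based rule for others), as described in the abstract.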

8.
Comput Biol Med ; 174: 108463, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38640634

ABSTRACT

Medical image fusion can provide doctors with more detailed data and thus improve the accuracy of disease diagnosis. In recent years, deep learning has been widely used in the field of medical image fusion. Traditional medical image fusion methods operate directly on pixels, for example by superposition. The introduction of deep learning methods has improved the effectiveness of medical image fusion. However, these methods still have problems such as edge blurring and information redundancy. In this paper, we propose a deep learning network model that integrates a Transformer with an improved DenseNet module, which can be applied to medical images and addresses the above problems. The method also transfers to natural images. The use of the Transformer and dense concatenation enhances the feature extraction capability of the method by limiting feature loss, which reduces the risk of edge blurring. We compared several representative traditional methods and more advanced deep learning methods with this method. The experimental results show that the Transformer and the improved DenseNet module have a strong feature extraction capability. The method yields good results both in terms of visual quality and objective image evaluation metrics.

9.
Sensors (Basel) ; 24(7)2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38610428

ABSTRACT

NASA's Soil Moisture Active Passive (SMAP) was originally designed to combine high-resolution active (radar) and coarse-resolution but highly sensitive passive (radiometer) L-band observations to achieve unprecedented spatial resolution and accuracy for soil moisture retrievals. However, shortly after SMAP was put into orbit, the radar component failed, and the high-resolution capability was lost. In this paper, the integration of an alternative radar sensor with the SMAP radiometer is proposed to enhance soil moisture retrieval capabilities over vegetated areas in the absence of the original high-resolution radar in the SMAP mission. ESA's Sentinel-1A C-band radar was used in this study to enhance the spatial resolution of the SMAP L-band radiometer and to improve soil moisture retrieval accuracy. To achieve this purpose, we downscaled the 9 km radiometer data of the SMAP to 1 km utilizing the Smoothing Filter-based Intensity Modulation (SFIM) method. An Artificial Neural Network (ANN) was then trained to exploit the synergy between the Sentinel-1A radar, the SMAP radiometer, and the in situ-measured soil moisture. An analysis of the data obtained for a plant growing season over the Mississippi Delta showed that the VH-polarized Sentinel-1A radar data can yield a correlation coefficient of 0.81 and serve as a complementary source to the SMAP radiometer for more accurate and enhanced soil moisture prediction over agricultural fields.
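A minimal sketch of SFIM-style downscaling as described above (illustrative only; variable names and the box-filter window are assumptions, and in the study the modulating band is Sentinel-1A backscatter):

import numpy as np
from scipy.ndimage import uniform_filter
from skimage.transform import resize

def sfim_downscale(sm_coarse, radar_fine, ratio=9):
    # SFIM: fine-scale estimate = upsampled coarse field * (fine band / smoothed fine band),
    # where the smoothing window matches the coarse-to-fine resolution ratio.
    sm_up = resize(sm_coarse.astype(float), radar_fine.shape, order=1)
    radar_smooth = uniform_filter(radar_fine.astype(float), size=ratio)
    return sm_up * radar_fine / (radar_smooth + 1e-12)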

10.
Med Biol Eng Comput ; 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38656734

ABSTRACT

This paper proposes a medical image fusion method in the non-subsampled shearlet transform (NSST) domain to combine a gray-scale image with the respective pseudo-color image obtained through different imaging modalities. The proposed method applies a novel improved dual-channel pulse-coupled neural network (IDPCNN) model to fuse the high-pass sub-images, whereas the Prewitt operator is combined with maximum regional energy (MRE) to construct the fused low-pass sub-image. First, the gray-scale image and luminance of the pseudo-color image are decomposed using NSST to find the respective sub-images. Second, the low-pass sub-images are fused by the Prewitt operator and MRE-based rule. Third, the proposed IDPCNN is utilized to get the fused high-pass sub-images from the respective high-pass sub-images. Fourth, the luminance of the fused image is obtained by applying inverse NSST on the fused sub-images, which is combined with the chrominance components of the pseudo-color image to construct the fused image. A total of 28 diverse medical image pairs, 11 existing methods, and nine objective metrics are used in the experiment. Qualitative and quantitative fusion results show that the proposed method is competitive with and even outpaces some of the existing medical fusion approaches. It is also shown that the proposed method efficiently combines two gray-scale images.
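A small sketch of a Prewitt-plus-regional-energy rule for fusing the low-pass sub-images (an interpretation for illustration; the paper's exact maximum-regional-energy formulation may differ):

import numpy as np
from scipy.ndimage import prewitt, uniform_filter

def fuse_lowpass(lp_a, lp_b, win=7):
    # Activity = Prewitt gradient magnitude weighted by local regional energy;
    # at each pixel the sub-image with the larger activity is selected.
    def activity(lp):
        lp = lp.astype(float)
        grad = np.hypot(prewitt(lp, axis=0), prewitt(lp, axis=1))
        energy = uniform_filter(lp ** 2, size=win)
        return grad * energy
    return np.where(activity(lp_a) >= activity(lp_b), lp_a, lp_b)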

11.
Article in English | MEDLINE | ID: mdl-38580555

ABSTRACT

Precise recognition of the intraparotid facial nerve (IFN) is crucial during parotid tumor resection. We aimed to explore the application of direct visualization of the IFN in parotid tumor resection. Fifteen patients with parotid tumors were enrolled in this study and underwent specific radiological scanning in which the IFNs were displayed as high-intensity images. After image segmentation, the IFN could be directly visualized preoperatively. Mixed reality combined with surgical navigation was applied to intraoperatively visualize the segmentation results directly as real-time three-dimensional holograms, guiding the surgeons in IFN dissection and tumor resection. Radiological visibility of the IFN, accuracy of image segmentation, and postoperative facial nerve function were analyzed. The trunks of the IFN were directly visible in the radiological images of all patients. Of 37 landmark points on the IFN, 36 were accurately segmented. Four patients were classified as House-Brackmann Grade I postoperatively. Two patients with malignancies had long-standing postoperative facial paralysis. Direct visualization of the IFN was a feasible novel method with high accuracy that could assist in recognition of the IFN and therefore potentially improve the treatment outcome of parotid tumor resection.

12.
J Imaging Inform Med ; 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38528288

ABSTRACT

In this paper, a segmentation-based image fusion method is proposed for the fusion of MR and CT images to obtain a high-contrast fused image that contains complementary information from both input images. The proposed method uses the fuzzy C-means method to extract information about the skull from the CT image. This skull information is used to extract soft tissue information from the MR image. Both the skull information and the soft tissue information are then fused using the fusion rule. The efficiency of the proposed method over other state-of-the-art fusion methods is analyzed and compared using qualitative and quantitative analysis. Qualitative analysis shows the improvement in contrast between the bone and the soft tissue using the proposed method over other state-of-the-art methods, without introducing any artifacts or distortions. Classical and gradient-based quantitative analyses also show significant improvement in the fused image obtained using the proposed method over the five state-of-the-art methods. The percentage improvements in standard deviation, average gradient, entropy, spatial frequency, QABF, and LABF of the proposed method over the best values obtained by the five state-of-the-art methods are 27.11%, 12.06%, 23.64%, 11.30%, 5.59%, and 13.70%, respectively.
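A toy sketch of the segmentation-driven fusion idea (illustrative assumptions: a simple threshold on the normalised CT stands in for the fuzzy C-means skull extraction, and the fusion rule is a plain mask-based selection):

import numpy as np

def segment_fuse(ct, mr, skull_thresh=0.6):
    # Normalise CT, call the brightest voxels "skull", take skull from CT
    # and soft tissue from MR elsewhere.
    ct = ct.astype(float)
    ct_n = (ct - ct.min()) / (ct.max() - ct.min() + 1e-12)
    skull_mask = ct_n >= skull_thresh
    fused = np.where(skull_mask, ct, mr.astype(float))
    return fused, skull_mask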

13.
Trials ; 25(1): 214, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38528619

ABSTRACT

BACKGROUND: Endovascular repair of aortic aneurysmal disease is established due to perceived advantages in patient survival, reduced postoperative complications, and shorter hospital lengths of stay. High spatial and contrast resolution 3D CT angiography images are used to plan the procedures and inform device selection and manufacture, but in standard care the surgery is performed using image guidance from 2D X-ray fluoroscopy with injection of nephrotoxic contrast material to visualise the blood vessels. This study aims to assess the benefit to patients, practitioners, and the health service of a novel image fusion medical device (Cydar EV), which makes this high-resolution 3D information available to operators at the time of surgery. METHODS: The trial is a multi-centre, open-label, two-armed randomised controlled clinical trial of 340 patients, randomised 1:1 to either standard treatment in endovascular aneurysm repair or treatment using Cydar EV, a CE-marked medical device comprising cloud computing, augmented intelligence, and computer vision. The primary outcome is procedural time, with secondary outcomes of procedural efficiency, technical effectiveness, patient outcomes, and cost-effectiveness. Patients with a clinical diagnosis of AAA or TAAA suitable for endovascular repair and able to provide written informed consent will be invited to participate. DISCUSSION: This trial is the first randomised controlled trial evaluating advanced image fusion technology in endovascular aortic surgery and is well placed to evaluate the effect of this technology on patient outcomes and cost to the NHS. TRIAL REGISTRATION: ISRCTN13832085. Registered December 3, 2021.


Subjects
Aortic Aneurysm, Abdominal; Blood Vessel Prosthesis Implantation; Endovascular Procedures; Humans; Aortic Aneurysm, Abdominal/diagnostic imaging; Aortic Aneurysm, Abdominal/surgery; Cost-Benefit Analysis; Cloud Computing; Endovascular Procedures/methods; Blood Vessel Prosthesis Implantation/adverse effects; Treatment Outcome; Retrospective Studies; Randomized Controlled Trials as Topic; Multicenter Studies as Topic
14.
Sensors (Basel) ; 24(6)2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38543997

ABSTRACT

The fusion of infrared and visible images is a well-researched task in computer vision. Fusion methods create fused images that replace manual inspection of single-sensor images and are often deployed on edge devices for real-time processing. However, there is an information imbalance between infrared and visible images. Existing methods often fail to emphasize temperature and edge-texture information, potentially leading to misinterpretations. Moreover, these methods are computationally complex and difficult to adapt to edge devices. This paper proposes a method that calculates the distribution proportion of infrared pixel values and allocates fusion weights to adaptively highlight key information. It introduces a weight allocation mechanism and a MobileBlock with a multispectral information complementary module; these innovations strengthen the model's fusion capability, make it more lightweight, and ensure information compensation. Training involves a temperature-color-perception loss function, enabling adaptive weight allocation based on the information in each image pair. Experimental results show superiority over mainstream fusion methods, particularly on an electric power equipment scene and on publicly available datasets.
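A rough sketch of one plausible reading of the distribution-based weight allocation (an assumption, not the paper's mechanism: weights follow the rank of each infrared pixel in the 8-bit intensity histogram, so hot regions dominate the fused result):

import numpy as np

def fuse_ir_visible(ir, vis):
    # Cumulative histogram of the infrared image -> per-pixel weight in [0, 1].
    hist, _ = np.histogram(ir, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / hist.sum()
    w_ir = cdf[np.clip(ir.astype(int), 0, 255)]
    return w_ir * ir.astype(float) + (1.0 - w_ir) * vis.astype(float)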

15.
Sensors (Basel) ; 24(5)2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38474949

ABSTRACT

Beijing Satellite 3 is a high-performance optical remote sensing satellite with a spatial resolution of 0.3-0.5 m. It can provide timely and independent ultra-high-resolution spatial big data and comprehensive spatial information application services. In many applications, high-resolution panchromatic images alone are insufficient, so it is necessary to fuse them with multispectral images that contain spectral color information. At present, however, there is no research on fusion methods for BJ-3A satellite images. This article explores six traditional pixel-level fusion methods (HPF, HCS, wavelet, modified-IHS, PC, and Brovey) for fusing the panchromatic and multispectral images of the BJ-3A satellite. The fusion results were analyzed qualitatively from two aspects: spatial detail enhancement capability and spectral fidelity. Five indicators, namely mean, standard deviation, entropy, correlation coefficient, and average gradient, were used for quantitative analysis. Finally, the fusion results were comprehensively evaluated from three aspects: spectral curves of ground objects, absolute error maps, and object-oriented classification effects. The findings suggest that HPF is the most suitable method for fusing BJ-3A panchromatic and multispectral images. These results can serve as a guide for the implementation of BJ-3A panchromatic and multispectral data fusion in real-world scenarios.
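A brief sketch of High-Pass Filter (HPF) pansharpening, the approach the study found most suitable (illustrative only; the box-filter size and injection gain are assumptions, and operational HPF implementations typically weight the injected detail per band):

import numpy as np
from scipy.ndimage import uniform_filter
from skimage.transform import resize

def hpf_fuse(pan, ms, kernel=5, gain=0.5):
    # Inject panchromatic high-frequency detail into each upsampled multispectral band.
    pan = pan.astype(float)
    detail = pan - uniform_filter(pan, size=kernel)
    fused = np.empty(pan.shape + (ms.shape[2],))
    for b in range(ms.shape[2]):
        band_up = resize(ms[..., b].astype(float), pan.shape, order=1)
        fused[..., b] = band_up + gain * detail
    return fused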

16.
Sensors (Basel) ; 24(5)2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38475050

ABSTRACT

Latent Low-Rank Representation (LatLRR) has emerged as a prominent approach for fusing visible and infrared images. In this approach, images are decomposed into three fundamental components: the base part, the salient part, and the sparse part. The aim is to blend the base and salient features to reconstruct images accurately. However, existing methods often focus on combining the base and salient parts while neglecting the sparse component. In this study, we instead advocate including all three parts generated by the LatLRR decomposition in the image fusion process. Moreover, the effective integration of Convolutional Neural Network (CNN) technology with LatLRR remains challenging, particularly after the inclusion of the sparse parts. This study uses fusion strategies involving weighted averaging, summation, VGG19, and ResNet50 in various combinations to analyze fusion performance once the sparse parts are introduced. The findings show that including the sparse parts in the fusion process significantly enhances fusion performance. The suggested strategy employs deep learning techniques to fuse the base and sparse parts, and a summation strategy to fuse the salient parts. These findings improve the performance of LatLRR-based methods and offer valuable insights for further advances in image fusion.
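A toy sketch of recombining the three LatLRR parts from two source images (the decomposition itself is assumed to be precomputed, and simple averages stand in for the paper's VGG19/ResNet50 feature-based rules):

import numpy as np

def fuse_latlrr_parts(base_a, base_b, salient_a, salient_b, sparse_a, sparse_b):
    base = 0.5 * (base_a + base_b)        # stand-in for the deep-feature base fusion
    salient = salient_a + salient_b       # summation rule, as suggested in the abstract
    sparse = 0.5 * (sparse_a + sparse_b)  # stand-in for the deep-feature sparse fusion
    return base + salient + sparse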

17.
Curr Med Imaging ; 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38415461

ABSTRACT

BACKGROUND: Multimodal medical image fusion currently suffers from problems such as loss of texture detail, which blurs edge contours, and loss of image energy, which reduces contrast. OBJECTIVE: To solve these problems and obtain higher-quality fused images, this study proposes an image fusion method based on local saliency energy and multi-scale fractal dimension. METHODS: First, each medical image was divided into four layers of high-pass subbands and one low-pass subband using a non-subsampled contourlet transform. Second, to fuse the high-pass subbands of layers 2 to 4, fusion rules based on a multi-scale morphological gradient and an activity measure were used as external stimuli in a pulse-coupled neural network. Third, fusion rules based on an improved multi-scale fractal dimension and a new local saliency energy were proposed for the low-pass subband and for the first-layer high-pass subband (the one closest to the low-pass subband), respectively. Lastly, the fused image was created by performing the inverse non-subsampled contourlet transform on the fused subbands. RESULTS: On three multimodal medical image datasets, the proposed method was compared with seven other fusion methods using five common objective evaluation metrics. CONCLUSION: Experiments showed that this method preserves the contrast and edges of the fused image well and is strongly competitive in both subjective and objective evaluation.
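A small sketch of a multi-scale morphological gradient used as an activity measure for the high-pass subbands (illustrative assumptions: square structuring elements, and a simple choose-max rule standing in for the PCNN firing comparison):

import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def msmg(img, scales=(3, 5, 7)):
    # Multi-scale morphological gradient: mean of (dilation - erosion) over several scales.
    img = img.astype(float)
    grads = [grey_dilation(img, size=s) - grey_erosion(img, size=s) for s in scales]
    return np.mean(grads, axis=0)

def fuse_highpass(hp_a, hp_b):
    # Pick, per pixel, the coefficient from the subband with the larger activity.
    return np.where(msmg(hp_a) >= msmg(hp_b), hp_a, hp_b)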

18.
Phys Med Biol ; 69(5)2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38316044

ABSTRACT

Objective. Multimodal medical image fusion (MMIF) technology merges diverse medical images with rich information, boosting diagnostic efficiency and accuracy. Owing to its global optimization and single-valued nature, convolutional sparse representation (CSR) outperforms standard sparse representation (SR). By addressing the challenges of sensitivity to highly redundant dictionaries and robustness to misregistration, an adaptive convolutional sparsity scheme with measurement of the sub-band correlation in the non-subsampled contourlet transform (NSCT) domain is proposed for MMIF. Approach. The fusion scheme incorporates four main components: image decomposition into two scales, fusion of detail layers, fusion of base layers, and reconstruction of the two scales. We solved a Tikhonov regularization optimization problem on the source images to obtain the base and detail layers. Then, after CSR processing, the detail layers were sparsely decomposed using pre-trained dictionary filters to obtain initial coefficient maps. The sub-band correlation in the NSCT domain was used to refine the fusion coefficient maps, and sparse reconstruction produced the fused detail layer. Meanwhile, the base layers were fused by averaging. The final fused image was obtained via two-scale reconstruction. Main results. Experimental validation on clinical image sets revealed that the proposed fusion scheme can not only effectively eliminate the interference of partial misregistration, but also outperform representative state-of-the-art fusion schemes in the preservation of structural and textural details, according to subjective visual evaluations and objective quality evaluations. Significance. The proposed fusion scheme is competitive owing to its low-redundancy dictionary, robustness to misregistration, and better fusion performance. This is achieved by training the dictionary with minimal samples through CSR to adaptively preserve overcompleteness for the detail layers, and by constructing the fusion activity level from the sub-band correlation in the NSCT domain to maintain CSR attributes. Additionally, ordering the NSCT for reverse sparse representation further enhances the sub-band correlation and promotes the preservation of structural and textural details.
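A compact sketch of the Tikhonov-regularised two-scale (base/detail) decomposition mentioned in the Approach (a standard closed-form Fourier solution is assumed; the regularisation weight is arbitrary, and the CSR fusion of the detail layers is not shown):

import numpy as np

def two_scale_decompose(img, lam=5.0):
    # base = argmin_b ||img - b||^2 + lam * (||dx * b||^2 + ||dy * b||^2),
    # solved exactly in the Fourier domain with periodic boundaries.
    img = img.astype(float)
    h, w = img.shape
    dx = np.zeros((h, w)); dx[0, 0], dx[0, 1] = -1.0, 1.0   # horizontal difference kernel
    dy = np.zeros((h, w)); dy[0, 0], dy[1, 0] = -1.0, 1.0   # vertical difference kernel
    denom = 1.0 + lam * (np.abs(np.fft.fft2(dx))**2 + np.abs(np.fft.fft2(dy))**2)
    base = np.real(np.fft.ifft2(np.fft.fft2(img) / denom))
    return base, img - base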


Subjects
Algorithms; Magnetic Resonance Imaging; Magnetic Resonance Imaging/methods; Technology; Image Processing, Computer-Assisted/methods
19.
Curr Med Imaging ; 20: 1-13, 2024.
Article in English | MEDLINE | ID: mdl-38389343

ABSTRACT

BACKGROUND: Modern medical imaging modalities used by clinicians have many applications in the diagnosis of complicated diseases. These imaging technologies reveal the internal anatomy and physiology of the body. The fundamental idea behind medical image fusion is to increase the image's global and local contrast, enhance the visual impact, and change its format so that it is better suited for computer processing or human viewing, while preventing noise magnification and achieving good real-time performance. OBJECTIVE: The primary goal is to combine data from multimodal images (CT/MRI and MR-T1/MR-T2) into a single image that retains, to the greatest degree possible, the key characteristics (prominent features) of the source images. METHODS: Clinical accuracy is compromised because many classical fusion methods struggle to preserve all the prominent features of the original images. Furthermore, complex implementation, high computation time, and large memory requirements are key problems of transform-domain methods. To solve these problems, this research proposes a fusion framework for multimodal medical images that makes use of a multi-scale edge-preserving filter and visual saliency detection. The source images are decomposed into base and detail layers using a two-scale edge-preserving filter. Base layers are combined using an addition fusion rule, while detail layers are fused using weight maps constructed with the maximum symmetric surround saliency detection algorithm. RESULTS: The image produced by the proposed method achieves better objective evaluation metrics than other classical methods, as well as intact edge contours, higher global contrast, and no ringing effects or artifacts. CONCLUSION: The methodology offers a powerful and complementary set of diagnostic, therapeutic, and biomedical research capabilities with the potential to considerably strengthen medical practice and biological understanding.
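A condensed sketch of saliency-weighted two-scale fusion in this spirit (illustrative assumptions: Gaussian smoothing replaces the edge-preserving filter, a centre-surround difference approximates maximum symmetric surround saliency, and the base layers are averaged rather than added to keep the dynamic range):

import numpy as np
from scipy.ndimage import gaussian_filter

def saliency(img):
    # Centre-surround saliency proxy: difference between lightly and heavily smoothed images.
    img = img.astype(float)
    return np.abs(gaussian_filter(img, 1) - gaussian_filter(img, 15))

def fuse_two_scale(a, b):
    a, b = a.astype(float), b.astype(float)
    base_a, base_b = gaussian_filter(a, 5), gaussian_filter(b, 5)
    det_a, det_b = a - base_a, b - base_b
    w = saliency(a) / (saliency(a) + saliency(b) + 1e-12)   # detail-layer weight map
    return 0.5 * (base_a + base_b) + w * det_a + (1.0 - w) * det_b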


Subjects
Algorithms; Magnetic Resonance Imaging; Humans; Prospective Studies
20.
J Imaging Inform Med ; 37(2): 575-588, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38343225

ABSTRACT

Accurate delineation of the clinical target volume (CTV) is a crucial prerequisite for safe and effective radiotherapy. This study addresses the integration of magnetic resonance (MR) images to aid target delineation on computed tomography (CT) images. However, obtaining MR images directly can be challenging. Therefore, we employ AI-based image generation techniques to "intelligently generate" MR images from CT images to improve CTV delineation based on CT images. To generate high-quality MR images, we propose an attention-guided single-loop image generation model. The model can yield higher-quality images by introducing an attention mechanism in feature extraction and enhancing the loss function. Based on the generated MR images, we propose a CTV segmentation model that fuses multi-scale features through image fusion and a hollow space pyramid module to enhance segmentation accuracy. The image generation model used in this study improves the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) from 14.87 and 0.58 to 16.72 and 0.67, respectively, and improves the feature distribution distance and learned perceptual image similarity from 180.86 and 0.28 to 110.98 and 0.22, achieving higher-quality image generation. The proposed segmentation method demonstrates high accuracy: compared with the FCN method, the intersection-over-union ratio and the Dice coefficient improve from 0.8360 and 0.8998 to 0.9043 and 0.9473, respectively. The Hausdorff distance and mean surface distance decrease from 5.5573 mm and 2.3269 mm to 4.7204 mm and 0.9397 mm, respectively, achieving clinically acceptable segmentation accuracy. Our method might reduce physicians' manual workload and accelerate the diagnosis and treatment process while decreasing inter-observer variability in identifying anatomical structures.
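For reference, a tiny sketch of the overlap metrics quoted above (standard definitions; the epsilon guard is an assumption for empty masks):

import numpy as np

def dice_and_iou(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + 1e-12)   # Dice coefficient
    iou = inter / (union + 1e-12)                          # intersection over union
    return dice, iou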
